In [63]:
# Boring preliminaries
%pylab inline
import re
import math
import string
from collections import Counter
from __future__ import division
Before we can do things with words, we need some words. First we need some text, possibly from a file. Then we can break the text into words. I happen to have a big text called big.txt. We can read it, and see how big it is (in characters):
In [64]:
TEXT = file('big.txt').read()
len(TEXT)
Out[64]:
So, six million characters.
Now let's break the text up into words (or more formal-sounding, tokens). For now we'll ignore all the punctuation and numbers, and anything that is not a letter.
In [69]:
def tokens(text):
"List all the word tokens (consecutive letters) in a text. Normalize to lowercase."
return re.findall('[a-z]+', text.lower())
In [73]:
tokens('This is: A test, 1, 2, 3, this is.')
Out[73]:
In [70]:
WORDS = tokens(TEXT)
len(WORDS)
Out[70]:
So, a million words. Here are the first 10:
In [68]:
print(WORDS[:10])
The list WORDS is a list of the words in the TEXT, but it can also serve as a generative model of text. We know that language is very complicated, but we can create a simplified model of language that captures part of the complexity. In the bag of words model, we ignore the order of words, but maintain their frequency. Think of it this way: take all the words from the text, and throw them into a bag. Shake the bag, and then generating a sentence consists of pulling words out of the bag one at a time. Chances are it won't be grammatical or sensible, but it will have words in roughly the right proportions. Here's a function to sample an n word sentence from a bag of words:
In [82]:
def sample(bag, n=10):
"Sample a random n-word sentence from the model described by the bag of words."
return ' '.join(random.choice(bag) for _ in range(n))
In [85]:
sample(WORDS)
Out[85]:
Another representation for a bag of words is a Counter, which is a dictionary of {'word': count} pairs. For example,
In [23]:
Counter(tokens('Is this a test? It is a test!'))
Out[23]:
A Counter is like a dict, but with a few extra methods. Let's make a Counter for the big list of WORDS and get a feel for what's there:
In [93]:
COUNTS = Counter(WORDS)
print COUNTS.most_common(10)
In [92]:
for w in tokens('the rare and neverbeforeseen words'):
print COUNTS[w], w
In 1935, the linguist George Zipf noted that in any big text, the nth most frequent word appears with a frequency of about 1/n of the most frequent word. He gets credit for Zipf's Law, even though Felix Auerbach made the same observation in 1913. If we plot the frequency of words, most common first, on a log-log plot, they should come out as a straight line if Zipf's Law holds. Here we see that it is a fairly close fit:
In [97]:
M = COUNTS['the']
yscale('log'); xscale('log'); title('Frequency of n-th most frequent word and 1/n line.')
plot([c for (w, c) in COUNTS.most_common()])
plot([M/i for i in range(1, len(COUNTS)+1)]);
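As a quick numeric check (not in the original notebook), we can also compare the raw counts of a few top-ranked words against the 1/n prediction, using COUNTS and M from the cells above:

# The count of the nth most frequent word should be roughly M/n.
for (n, (w, c)) in enumerate(COUNTS.most_common(10), 1):
    print n, w, c, int(M / n)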
Given a word w, find the most likely correction c = correct(w).
Approach: Try all candidate words c that are known words that are near w. Choose the most likely one.
How to balance near and likely?
For now, in a trivial way: always prefer nearer, but when there is a tie on nearness, use the word with the highest WORDS count. Measure nearness by edit distance: the minimum number of deletions, transpositions, insertions, or replacements of characters. By trial and error, we determine that going out to edit distance 2 will give us reasonable results. Then we can define correct(w):
In [110]:
def correct(word):
"Find the best spelling correction for this word."
# Prefer edit distance 0, then 1, then 2; otherwise default to word itself.
candidates = (known(edits0(word)) or
known(edits1(word)) or
known(edits2(word)) or
[word])
return max(candidates, key=COUNTS.get)
The functions known and edits0 are easy; and edits2 is easy if we assume we have edits1:
In [111]:
def known(words):
"Return the subset of words that are actually in the dictionary."
return {w for w in words if w in COUNTS}
def edits0(word):
"Return all strings that are zero edits away from word (i.e., just word itself)."
return {word}
def edits2(word):
"Return all strings that are two edits away from this word."
return {e2 for e1 in edits1(word) for e2 in edits1(e1)}
Now for edits1(word): the set of candidate words that are one edit away. For example, given "wird", this would include "weird" (inserting an e) and "word" (replacing the i with an o), and also "iwrd" (transposing the w and i; known can then be used to filter this out of the set of final candidates). How could we get them? One way is to split the original word in all possible places, each split forming a pair of words, (a, b), before and after the place, and at each place either delete, transpose, replace, or insert a letter:
pairs:          | Ø+wird   | w+ird    | wi+rd    | wir+d    | wird+Ø    | Notes: (a, b) pair
deletions:      | Ø+ird    | w+rd     | wi+d     | wir+Ø    |           | Delete first char of b
transpositions: | Ø+iwrd   | w+rid    | wi+dr    |          |           | Swap first two chars of b
replacements:   | Ø+?ird   | w+?rd    | wi+?d    | wir+?    |           | Replace char at start of b
insertions:     | Ø+?+wird | w+?+ird  | wi+?+rd  | wir+?+d  | wird+?+Ø  | Insert char between a and b
In [100]:
def edits1(word):
"Return all strings that are one edit away from this word."
pairs = splits(word)
deletes = [a+b[1:] for (a, b) in pairs if b]
transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1]
replaces = [a+c+b[1:] for (a, b) in pairs for c in alphabet if b]
inserts = [a+c+b for (a, b) in pairs for c in alphabet]
return set(deletes + transposes + replaces + inserts)
def splits(word):
"Return a list of all possible (first, rest) pairs that comprise word."
return [(word[:i], word[i:])
for i in range(len(word)+1)]
alphabet = 'abcdefghijklmnopqrstuvwxyz'
In [101]:
splits('wird')
Out[101]:
In [103]:
print edits0('wird')
In [102]:
print edits1('wird')
In [104]:
print len(edits2('wird'))
In [112]:
map(correct, tokens('Speling errurs in somethink. Whutever; unusuel misteakes everyware?'))
Out[112]:
Can we make the output prettier than that?
In [113]:
def correct_text(text):
"Correct all the words within a text, returning the corrected text."
return re.sub('[a-zA-Z]+', correct_match, text)
def correct_match(match):
"Spell-correct word in match, and preserve proper upper/lower/title case."
word = match.group()
return case_of(word)(correct(word.lower()))
def case_of(text):
"Return the case-function appropriate for text: upper, lower, title, or just str."
return (str.upper if text.isupper() else
str.lower if text.islower() else
str.title if text.istitle() else
str)
In [114]:
map(case_of, ['UPPER', 'lower', 'Title', 'CamelCase'])
Out[114]:
In [115]:
correct_text('Speling Errurs IN somethink. Whutever; unusuel misteakes?')
Out[115]:
In [116]:
correct_text('Audiance sayzs: tumblr ...')
Out[116]:
So far so good. You can probably think of a dozen ways to make this better. Here's one: in the text "three, too, one, blastoff!" we might want to correct "too" with "two", even though "too" is in the dictionary. We can do better if we look at a sequence of words, not just an individual word one at a time. But how can we choose the best corrections of a sequence? The ad-hoc approach worked pretty well for single words, but now we could use some real theory ...
We should be able to compute the probability of a word, $P(w)$. We do that with the function pdist, which takes as input a Counter (that is, a bag of words) and returns a function that acts as a probability distribution over all possible words. In a probability distribution, the probability of each word is between 0 and 1, and the sum of the probabilities is 1.
In [154]:
def pdist(counter):
"Make a probability distribution, given evidence from a Counter."
N = sum(counter.values())
return lambda x: counter[x]/N
P = pdist(COUNTS)
In [155]:
for w in tokens('"The" is most common word in English'):
print P(w), w
Now, what is the probability of a sequence of words? Use the definition of a joint probability:
$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_1 w_2) \times \cdots \times P(w_n \mid w_1 \ldots w_{n-1})$
The bag of words model assumes that each word is drawn from the bag independently of the others. This gives us the following approximation, which is clearly wrong, but often useful:
$P(w_1 \ldots w_n) = P(w_1) \times P(w_2) \times P(w_3) \times \cdots \times P(w_n)$
How can we compute $P(w_1 \ldots w_n)$? We'll use a different function name, Pwords, rather than P, and we compute the product of the individual probabilities:
In [120]:
def Pwords(words):
"Probability of words, assuming each word is independent of others."
    return product(P(w) for w in words)
def product(nums):
"Multiply the numbers together. (Like `sum`, but with multiplication.)"
result = 1
for x in nums:
result *= x
return result
In [121]:
tests = ['this is a test',
'this is a unusual test',
'this is a neverbeforeseen test']
for test in tests:
print Pwords(tokens(test)), test
Yikes—it seems wrong to give a probability of 0 to the last one; it should just be very small. We'll come back to that later. The other probabilities seem reasonable.
Task: given a sequence of characters with no spaces separating words, recover the sequence of words.
Why? Some languages have no word delimiters, such as Chinese: 不带空格的词 ("words without spaces"). And in English there are sub-genres with no word delimiters (spelling errors, URLs).
Approach 1: Enumerate all candidate segmentations and choose the one with the highest Pwords.
Problem: how many segmentations are there for an n-character text? Each of the n-1 positions between characters is either a word break or not, so there are $2^{n-1}$ segmentations.
Approach 2: Make one segmentation, into a first word and remaining characters. If we assume words are independent then we can maximize the probability of the first word adjoined to the best segmentation of the remaining characters.
assert segment('choosespain') == ['choose', 'spain']
segment('choosespain') ==
max(Pwords(['c'] + segment('hoosespain')),
Pwords(['ch'] + segment('oosespain')),
Pwords(['cho'] + segment('osespain')),
Pwords(['choo'] + segment('sespain')),
...
Pwords(['choosespain'] + segment('')))
To make this somewhat efficient, we need to avoid re-computing the segmentations of the remaining characters. This can be done explicitly by dynamic programming or implicitly with memoization. Also, we shouldn't consider all possible lengths for the first word; we can impose a maximum length. What should it be? A little more than the longest word seen so far.
In [156]:
def memo(f):
"Memoize function f, whose args must all be hashable."
cache = {}
def fmemo(*args):
if args not in cache:
cache[args] = f(*args)
return cache[args]
fmemo.cache = cache
return fmemo
In [157]:
max(len(w) for w in COUNTS)
Out[157]:
In [160]:
def splits(text, start=0, L=20):
"Return a list of all (first, rest) pairs; start <= len(first) <= L."
return [(text[:i], text[i:])
for i in range(start, min(len(text), L)+1)]
In [161]:
print splits('word')
print splits('reallylongtext', 1, 4)
In [162]:
@memo
def segment(text):
"Return a list of words that is the most probable segmentation of text."
if not text:
return []
else:
candidates = ([first] + segment(rest)
for (first, rest) in splits(text, 1))
return max(candidates, key=Pwords)
In [163]:
segment('choosespain')
Out[163]:
In [164]:
segment('speedofart')
Out[164]:
In [165]:
decl = ('wheninthecourseofhumaneventsitbecomesnecessaryforonepeople' +
'todissolvethepoliticalbandswhichhaveconnectedthemwithanother' +
'andtoassumeamongthepowersoftheearththeseparateandequalstation' +
'towhichthelawsofnatureandofnaturesgodentitlethem')
In [166]:
print(segment(decl))
In [172]:
Pwords(segment(decl))
Out[172]:
In [171]:
Pwords(segment(decl * 2))
Out[171]:
In [170]:
Pwords(segment(decl * 3))
Out[170]:
That's a problem. We'll come back to it later.
In [173]:
segment('smallandinsignificant')
Out[173]:
In [174]:
segment('largeandinsignificant')
Out[174]:
In [175]:
print(Pwords(['large', 'and', 'insignificant']))
print(Pwords(['large', 'and', 'in', 'significant']))
Summary:
Let's move up from millions to billions and billions of words. Once we have that amount of data, we can start to look at two-word sequences, without them being too sparse. I happen to have data files available in the format of "word \t count", and bigram data in the form of "word1 word2 \t count". Let's arrange to read them in:
In [178]:
def load_counts(filename, sep='\t'):
"""Return a Counter initialized from key-value pairs,
one on each line of filename."""
C = Counter()
for line in open(filename):
key, count = line.split(sep)
C[key] = int(count)
return C
In [180]:
COUNTS1 = load_counts('count_1w.txt')
COUNTS2 = load_counts('count_2w.txt')
P1w = pdist(COUNTS1)
P2w = pdist(COUNTS2)
In [181]:
print len(COUNTS1), sum(COUNTS1.values())/1e9
print len(COUNTS2), sum(COUNTS2.values())/1e9
In [183]:
COUNTS2.most_common(30)
Out[183]:
A less-wrong approximation:
$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \times \cdots \times P(w_n \mid w_{n-1})$
This is called the bigram model. It is equivalent to taking the text, cutting it up into slips of paper with two words on each slip, and distributing the slips among multiple bags: each slip goes into a bag labelled with the first word on the slip. Then, to generate language, we choose the first word from the original single bag of words, and choose each subsequent word from the bag labelled with the previously-chosen word.
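The bigram counts can drive a generative sampler, too. Here is a minimal sketch (not part of the original notebook) that builds the "bags" described above, assuming the keys of COUNTS2 are 'word1 word2' strings; BAGS and sample2 are names made up for this sketch:

from collections import defaultdict

# Build one 'bag' per first word: BAGS[w1] is a list of (w2, count) pairs.
BAGS = defaultdict(list)
for (bigram, count) in COUNTS2.items():
    w1, w2 = bigram.split()
    BAGS[w1].append((w2, count))

def sample2(n=10, prev='the'):
    "Generate n words, drawing each one from the bag labelled with the previous word."
    out = []
    for _ in range(n):
        pairs = BAGS.get(prev)
        if not pairs:                       # No bigram evidence for this word:
            prev = random.choice(WORDS)     # fall back to the plain bag of words.
        else:
            r = random.uniform(0, sum(c for (_, c) in pairs))
            for (w2, c) in pairs:
                r -= c
                if r <= 0:
                    prev = w2
                    break
        out.append(prev)
    return ' '.join(out)

A call like sample2(10) should produce text that is locally more coherent than sample(WORDS), since each word is conditioned on the one before it.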
We already defined pdist, which gives the probability of a single discrete event from evidence stored in a Counter; now we need conditional probabilities as well. Recall that the less-wrong bigram model approximation to English is:
$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \times \cdots \times P(w_n \mid w_{n-1})$
where the conditional probability of a word given the previous word is defined as:
$P(w_n \mid w_{n-1}) = P(w_{n-1}w_n) / P(w_{n-1}) $
In [184]:
def Pwords2(words, prev='<S>'):
"The probability of a sequence of words, using bigram data, given prev word."
return product(cPword(w, (prev if (i == 0) else words[i-1]) )
for (i, w) in enumerate(words))
# Change Pwords to use P1w (the bigger dictionary) instead of P
def Pwords(words):
"Probability of words, assuming each word is independent of others."
return product(P1w(w) for w in words)
def cPword(word, prev):
"Conditional probability of word, given previous word."
bigram = prev + ' ' + word
if P2w(bigram) > 0 and P1w(prev) > 0:
return P2w(bigram) / P1w(prev)
else: # Average the back-off value and zero.
return P1w(word) / 2
In [185]:
print Pwords(tokens('this is a test'))
print Pwords2(tokens('this is a test'))
print Pwords2(tokens('is test a this'))
To make segment2, we copy segment, make sure to pass around the previous token, and evaluate probabilities with Pwords2 instead of Pwords.
In [202]:
@memo
def segment2(text, prev='<S>'):
"Return best segmentation of text; use bigram data."
if not text:
return []
else:
candidates = ([first] + segment2(rest, first)
for (first, rest) in splits(text, 1))
return max(candidates, key=lambda words: Pwords2(words, prev))
In [190]:
print segment2('choosespain')
print segment2('speedofart')
print segment2('smallandinsignificant')
print segment2('largeandinsignificant')
In [193]:
adams = ('faroutintheunchartedbackwatersoftheunfashionableendofthewesternspiral' +
'armofthegalaxyliesasmallunregardedyellowsun')
print segment(adams)
print segment2(adams)
In [194]:
P1w('unregarded')
Out[194]:
In [195]:
tolkein = 'adrybaresandyholewithnothinginittositdownonortoeat'
print segment(tolkein)
print segment2(tolkein)
Conclusion? The bigram model is a little better, but not much. Hundreds of billions of words are still not enough. (Why not trillions?) It could also be made more efficient.
So far, we've got an intuitive feel for how this all works. But we don't have any solid metrics that quantify the results. Without metrics, we can't say if we are doing well, nor if a change is an improvement. In general, when developing a program that relies on data to help make predictions, it is good practice to divide your data into three sets:
Training set: the data used to build the model.
Development set: examples used while developing and debugging the program.
Test set: held-out examples used only for the final evaluation.
For this program, the training data is the word frequency counts, the development set is the examples like "choosespain" that we have been playing with, and now we need a test set.
In [201]:
cat = ''.join   # Join a list of words back into a single string (used by test_one_segment).

def test_segmenter(segmenter, tests):
    "Try segmenter on tests; report failures; return (number correct, number of tests)."
return sum([test_one_segment(segmenter, test)
for test in tests]), len(tests)
def test_one_segment(segmenter, test):
words = tokens(test)
result = segmenter(cat(words))
correct = (result == words)
if not correct:
print 'expected', words
print 'got ', result
return correct
proverbs = ("""A little knowledge is a dangerous thing
A man who is his own lawyer has a fool for his client
All work and no play makes Jack a dull boy
Better to remain silent and be thought a fool than to speak and remove all doubt
Do unto others as you would have them do to you
Early to bed and early to rise, makes a man healthy, wealthy and wise
Fools rush in where angels fear to tread
Genius is one percent inspiration, ninety-nine percent perspiration
If you lie down with dogs, you will get up with fleas
Lightning never strikes twice in the same place
Power corrupts; absolute power corrupts absolutely
Here today, gone tomorrow
See no evil, hear no evil, speak no evil
Sticks and stones may break my bones, but words will never hurt me
Take care of the pence and the pounds will take care of themselves
Take care of the sense and the sounds will take care of themselves
The bigger they are, the harder they fall
The grass is always greener on the other side of the fence
The more things change, the more they stay the same
Those who do not learn from history are doomed to repeat it"""
.splitlines())
In [203]:
test_segmenter(segment, proverbs)
Out[203]:
In [204]:
test_segmenter(segment2, proverbs)
Out[204]:
This confirms that both segmenters are very good, and that segment2 is slightly better. There is much more that can be done in terms of the variety of tests, and in measuring statistical significance.
In [205]:
tests = ['this is a test',
'this is a unusual test',
'this is a nongovernmental test',
'this is a neverbeforeseen test',
'this is a zqbhjhsyefvvjqc test']
for test in tests:
print Pwords(tokens(test)), test
The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that "nongovernmental" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. That seems too strict: there must be some "real" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of the likelihood of being a "real" word. It does seem that "neverbeforeseen" is more English-like than "zqbhjhsyefvvjqc", and so perhaps should have a higher probability.
We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because it is more likely that a legitimate one will appear that has not been observed before.
We can think of our model as being overly spiky; it has a spike of probability mass wherever a word or phrase occurs in the corpus. What we would like to do is smooth over those spikes so that we get a model that does not depend on the details of our corpus. The process of "fixing" the model is called smoothing.
For example, Laplace was asked what the probability is that the sun will rise tomorrow. Given data that it has risen $n/n$ times for the last $n$ days, the maximum likelihood estimate is 1. But Laplace wanted to balance the data with the possibility that tomorrow, either it will rise or it won't, so he came up with $(n + 1) / (n + 2)$.
What we know is little, and what we are ignorant of is immense.
— Pierre Simon Laplace, 1749-1827
In [207]:
def pdist_additive_smoothed(counter, c=1):
"""The probability of word, given evidence from the counter.
Add c to the count for each item, plus the 'unknown' item."""
N = sum(counter.values()) # Amount of evidence
Nplus = N + c * (len(counter) + 1) # Evidence plus fake observations
return lambda word: (counter[word] + c) / Nplus
P1w = pdist_additive_smoothed(COUNTS1)
In [208]:
P1w('neverbeforeseen')
Out[208]:
But now there's a problem ... we now have previously-unseen words with non-zero probabilities. And maybe $10^{-12}$ is about right for words that are observed in text: that is, if I'm reading a new text, the probability that the next word is unknown might be around $10^{-12}$. But if I'm manufacturing 20-letter sequences at random, the probability that one will be a word is much, much lower than $10^{-12}$.
Look what happens:
In [209]:
segment('thisisatestofsegmentationofalongsequenceofwords')
Out[209]:
There are two problems:
First, we don't have a clear model of the unknown words. We just say "unknown" but we don't distinguish likely unknown from unlikely unknown. For example, is an 8-character unknown more likely than a 20-character unknown?
Second, we don't take into account evidence from parts of the unknown. For example, "unglobulate" versus "zxfkogultae".
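To illustrate the second point, here is a crude sketch (not part of the notebook) that scores a string by its letter bigrams, using the words in COUNTS as evidence; LETTER2 and Pletters are names made up for this sketch:

# Count letter bigrams over the vocabulary, weighted by word frequency.
LETTER2 = Counter()
for (w, c) in COUNTS.items():
    padded = ' ' + w + ' '                  # Spaces mark the word boundaries.
    for i in range(len(padded) - 1):
        LETTER2[padded[i:i+2]] += c

N_LETTER2 = sum(LETTER2.values())

def Pletters(word):
    "Crude add-one-smoothed letter-bigram score of how English-like a string looks."
    padded = ' ' + word + ' '
    return product((LETTER2[padded[i:i+2]] + 1) / N_LETTER2
                   for i in range(len(padded) - 1))

print Pletters('unglobulate')
print Pletters('zqbhjhsyefvvjqc')

A string full of rare letter pairs like "zq" and "jq" should come out with a much lower score, which is exactly the kind of evidence a better unknown-word model could use.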
For our next approach, Good-Turing smoothing re-estimates the probability of zero-count words, based on the probability of one-count words (it can also re-estimate higher counts, but that is less interesting).
I. J. Good (1916-2009) and Alan Turing (1912-1954)
So, how many one-count words are there in COUNTS? (There aren't any in COUNTS1.) And what are their word lengths? Let's find out:
In [225]:
singletons = (w for w in COUNTS if COUNTS[w] == 1)
lengths = map(len, singletons)
Counter(lengths).most_common()
Out[225]:
In [213]:
1357 / sum(COUNTS.values())
Out[213]:
In [229]:
hist(lengths, bins=len(set(lengths)));
In [230]:
def pdist_good_turing_hack(counter, onecounter, base=1/26., prior=1e-8):
"""The probability of word, given evidence from the counter.
For unknown words, look at the one-counts from onecounter, based on length.
This gets ideas from Good-Turing, but doesn't implement all of it.
prior is an additional factor to make unknowns less likely.
base is how much we attenuate probability for each letter beyond longest."""
N = sum(counter.values())
N2 = sum(onecounter.values())
lengths = map(len, [w for w in onecounter if onecounter[w] == 1])
ones = Counter(lengths)
longest = max(ones)
return (lambda word:
counter[word] / N if (word in counter)
else prior * (ones[len(word)] / N2 or
ones[longest] / N2 * base ** (len(word)-longest)))
# Redefine P1w
P1w = pdist_good_turing_hack(COUNTS1, COUNTS)
In [231]:
segment.cache.clear()
segment('thisisatestofsegmentationofaverylongsequenceofwords')
Out[231]:
That was somewhat unsatisfactory. We really had to crank up the prior, specifically because the process of running segment generates so many non-word candidates (and also because there will be fewer unknowns with respect to the billion-word COUNTS1 than with respect to the million-word COUNTS). It would be better to separate out the prior from the word distribution, so that the same distribution could be used for multiple tasks, not just for this one.
Now let's think for a short while about smoothing bigram counts. Specifically, what if we haven't seen a bigram sequence, but we've seen both words individually? For example, to evaluate P("Greenland") in the phrase "turn left at Greenland", we might have three pieces of evidence:
P("Greenland")
P("Greenland" | "at")
P("Greenland" | "left", "at")
Presumably, the first would have a relatively large count, and thus large reliability, while the second and third would have decreasing counts and reliability. With interpolation smoothing we combine all three pieces of evidence, with a linear combination:
$P(w_3 \mid w_1w_2) = c_1 P(w_3) + c_2 P(w_3 \mid w_2) + c_3 P(w_3 \mid w_1w_2)$
How do we choose $c_1, c_2, c_3$? By experiment: train on training data, maximize $c$ values on development data, then evaluate on test data.
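As a minimal sketch of that linear combination (not in the original notebook), we can blend the unigram and bigram estimates we already have; we drop the trigram term since we have no trigram counts, and the weights below are illustrative guesses rather than tuned values:

def Pinterp(word, prev, c1=0.2, c2=0.8):
    "Interpolated estimate: blend unigram and bigram evidence (illustrative weights)."
    return c1 * P1w(word) + c2 * cPword(word, prev)

print Pinterp('greenland', 'at')
print Pinterp('greenland', 'the')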
However, when we do this, we are saying, with probability $c_1$, that a word can appear anywhere, regardless of previous words. But some words are more free to do that than other words. Consider two words with similar probability:
In [232]:
print P1w('francisco')
print P1w('individuals')
They have similar unigram probabilities but differ in their freedom to be the second word of a bigram:
In [234]:
print [bigram for bigram in COUNTS2 if bigram.endswith('francisco')]
In [235]:
print [bigram for bigram in COUNTS2 if bigram.endswith('individuals')]
Intuitively, words that have appeared in many different bigrams before are more likely to appear in a new, previously unseen bigram. In Kneser-Ney smoothing (Reinhard Kneser, Hermann Ney) we multiply the bigram counts by this ratio. But I won't implement that here, because The Count never covered it.
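To put a number on that intuition with the data at hand (this only computes continuation counts; it is not a Kneser-Ney implementation):

def continuation_count(word):
    "Number of distinct words observed immediately before `word` in the bigram data."
    return len([bigram for bigram in COUNTS2 if bigram.endswith(' ' + word)])

print continuation_count('francisco'), continuation_count('individuals')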
Let's tackle one more task: decoding secret codes. We'll start with the simplest of codes, a rotation cipher, sometimes called a shift cipher or a Caesar cipher (because this was state-of-the-art cryptography in 100 BC). First, a method to encode:
In [236]:
def rot(msg, n=13):
"Encode a message with a rotation (Caesar) cipher."
return encode(msg, alphabet[n:]+alphabet[:n])
def encode(msg, key):
"Encode a message with a substitution cipher."
table = string.maketrans(upperlower(alphabet), upperlower(key))
return msg.translate(table)
def upperlower(text): return text.upper() + text.lower()
In [237]:
rot('This is a secret message.', 1)
Out[237]:
In [238]:
rot('This is a secret message.')
Out[238]:
In [239]:
rot(rot('This is a secret message.'))
Out[239]:
Now decoding is easy: try all 26 candidates, and find the one with the maximum Pwords:
In [240]:
def decode_rot(secret):
"Decode a secret message that has been encoded with a rotation cipher."
candidates = [rot(secret, i) for i in range(len(alphabet))]
return max(candidates, key=lambda msg: Pwords(tokens(msg)))
In [249]:
msg = 'Who knows the answer?'
secret = rot(msg, 17)
print(secret)
print(decode_rot(secret))
Let's make it a tiny bit harder. When the secret message contains separate words, it is too easy to decode by guessing that the one-letter words are most likely "I" or "a". So what if the encode routine mushed all the letters together:
In [244]:
def encode(msg, key):
"Encode a message with a substitution cipher; remove non-letters."
msg = cat(tokens(msg)) ## Change here
table = string.maketrans(upperlower(alphabet), upperlower(key))
return msg.translate(table)
Now we can decode by segmenting. We change candidates to be a list of segmentations, and still choose the candidate with the best Pwords:
In [245]:
def decode_rot(secret):
"""Decode a secret message that has been encoded with a rotation cipher,
and which has had all the non-letters squeezed out."""
candidates = [segment(rot(secret, i)) for i in range(len(alphabet))]
return max(candidates, key=lambda msg: Pwords(msg))
In [247]:
msg = 'Who knows the answer this time? Anyone? Bueller?'
secret = rot(msg, 19)
print(secret)
print(decode_rot(secret))
In [248]:
candidates = [segment(rot(secret, i)) for i in range(len(alphabet))]
for c in candidates:
print c, Pwords(c)
What about a general substitution cipher? The problem is that there are 26! substitution ciphers, and we can't enumerate all of them; we would need to search through this space. Initially make some guess at a substitution, then swap two letters; if that looks better, keep going; if not, try something else. This approach solves most substitution-cipher problems, although it can take a few minutes on a message of a hundred words or so.
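Here is a rough sketch of that hill-climbing search (not the notebook's code): start from a random key, repeatedly try swapping two letters, and keep a swap only when the segmentation's log-probability improves. The names logPwords and decode_subst are made up for this sketch, and a serious solver would add random restarts and many more iterations:

def logPwords(words):
    "Sum of log-probabilities, to avoid floating-point underflow on long texts."
    return sum(log(max(P1w(w), 1e-100)) for w in words)

def decode_subst(secret, iterations=2000):
    "Hill-climbing search over substitution-cipher keys."
    key = list(alphabet)
    random.shuffle(key)
    best_key = key[:]
    best_score = logPwords(segment(encode(secret, ''.join(key)).lower()))
    for _ in range(iterations):
        i, j = random.choice(range(26)), random.choice(range(26))
        key[i], key[j] = key[j], key[i]            # Try swapping two letters of the key.
        score = logPwords(segment(encode(secret, ''.join(key)).lower()))
        if score > best_score:
            best_key, best_score = key[:], score   # Keep the improvement ...
        else:
            key = best_key[:]                      # ... otherwise undo the swap.
    return segment(encode(secret, ''.join(best_key)).lower())

Note that segment is memoized, so repeated candidate texts are cheap, but its cache grows across keys; calling segment.cache.clear() now and then (as we did earlier) keeps memory in check.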
What to do next? Here are some options: